O Chake e o Chef utilizam Ruby como linguagem de script e, portanto, dependem de que o Ruby esteja instalado na máquina de onde os comandos serão executados.
Ruby version >= 0

O Chake em si é uma gem (pacote do Ruby) que utiliza o Rake para a execução dos comandos (tasks). Portanto, ambas as gems devem ser instaladas.
Linux
gem install chake
gem install rake

Para a execução local, foram preparadas VMs e, portanto, será necessário baixar e instalar o Vagrant.
As máquinas alvo da instalação devem possuir sudo instalado, além de ferramentas como wget e curl. Para permitir a instalação no Windows, é necessário que a máquina possua o gerenciador de pacotes Chocolatey. A lista de dependências dos nós alvo é a seguinte:
Windows:
- Chocolatey
- sudo
- wget
- curl
Linux:
- sudo
- wget
- curl

├── bootstrap_files
│ └── chefdk_3.6.57-1_amd64.deb
├── config
│ ├── local
│ └── roles
├── config.rb
├── cookbooks
│ ├── basics
│ ├── docker
│ ├── firewalld
│ └── gitlab-ci-runner
├── nodes.d
├── nodes.yaml
├── Rakefile
├── README.md
├── scripts
└── Vagrantfile

| Arquivo | Descrição |
|---|---|
| nodes.yaml | Lista dos nós que estão sendo gerenciados e uma lista das receitas que devem ser executadas em cada nó. |
| config.rb | Configurações que serão utilizadas pelo chef-solo. |
| config/ambiente | Configurações de IP e SSH que devem ser utilizadas pelo Chake para realizar SSH nos nós. |
| config/roles | Roles que serão utilizados pelo chef-solo. |
| cookbooks | Diretório com as receitas que serão executadas nos nós. |
| Vagrantfile | Arquivo de configuração do Vagrant. Pode ser usado para rodar as VMs/Containers localmente. |
| scripts | Pasta com scripts para utilização no processo de deploy. |
Para adicionar um novo ambiente, crie uma nova pasta dentro do diretório config com o nome do ambiente. Neste diretório devem existir três arquivos:
ips.yaml: Contém os IPs dos nós que vão receber as configurações.
ssh_config: Contém as configurações para ssh dos nós.
runners.yaml: Contém as configurações dos runners do Gitlab que serão aplicadas nos nós.

Adicione um novo nó no arquivo nodes.yaml:
gitlab-runner:
  run_list:
  - role[gitlab-runner]

Adicione o IP do novo nó no arquivo ips.yaml dentro da pasta do ambiente em que este novo nó se encontra.
gitlab-runner: 10.0.2.10

Adicione as configurações de SSH no arquivo ssh_config dentro da pasta do ambiente em que o nó se encontra.
Host gitlab-runner
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/alessandrocb/Projetos/chef/.vagrant/machines/gitlab-runner/virtualbox/private_key
IdentitiesOnly yes
LogLevel FATAL

O comando para aplicar as receitas nos nós é o converge. Para convergir as máquinas, o comando é rake converge:<nó> CHAKE_ENV=<ambiente>. Ao rodar este comando, todas as receitas ou roles aplicadas ao nó que estão presentes no nodes.yaml são aplicadas à máquina. O usuário pode selecionar qual máquina quer convergir passando a opção de nó no comando, como por exemplo rake converge:gitlab-runner ou rake converge:desktop; ao rodar o comando sem a opção de nó, todas as máquinas serão convergidas, ignorando as configurações locais que possuem o prefixo local://. A variável CHAKE_ENV dita em qual ambiente o converge irá ocorrer: caso essa variável seja omitida, o chake utilizará as configurações presentes em config/local/; caso um ambiente seja especificado, como CHAKE_ENV=ed, o converge irá ocorrer com as especificações dos arquivos em config/ed/.
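A regra de seleção do diretório de configuração descrita acima pode ser esboçada no shell (esboço ilustrativo do comportamento, não o código real do chake):

```shell
# Esboço hipotético: como CHAKE_ENV define o diretório de configuração.
# Se a variável não for definida, usa-se o ambiente "local" (config/local/).
CHAKE_ENV="${CHAKE_ENV:-local}"
CONFIG_DIR="config/${CHAKE_ENV}"
echo "Usando configurações de: ${CONFIG_DIR}"
```

Com CHAKE_ENV=ed, por exemplo, o diretório resolvido seria config/ed.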
Para compilar para esses sistemas, é necessário instalar o cross-compiler:

sudo apt-get install gcc-arm-linux-gnueabi

É necessário alterar o arquivo de compilação do Kubernetes hack/lib/golang.sh e adicionar flags para tornar possível o cross-compiling para ARMv6 e ARMHF.
Mudar a seção:
export GOOS=${platform%/*}
export GOARCH=${platform##*/}

Para:
export GOOS=${platform%/*}
export GOARCH=${platform##*/}
export GOARM=5

E depois alterar o cross-compiler para:
case "${platform}" in
"linux/arm")
export CGO_ENABLED=1
export CC=arm-linux-gnueabi-gcc

Para compilar:
make all WHAT=cmd/kube-proxy KUBE_VERBOSE=5 KUBE_BUILD_PLATFORMS=linux/arm
make all WHAT=cmd/kubelet KUBE_VERBOSE=5 KUBE_BUILD_PLATFORMS=linux/arm
make all WHAT=cmd/kubectl KUBE_VERBOSE=5 KUBE_BUILD_PLATFORMS=linux/arm

Não encontrei esse Raspberry à venda e nem alguém que possua essa primeira versão, mas não acredito que o sistema consiga rodar nele, pois a capacidade de hardware é muito baixa.
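A extração de GOOS e GOARCH a partir da string de plataforma, feita em hack/lib/golang.sh, pode ser conferida isoladamente no shell (esboço ilustrativo):

```shell
# Esboço: extração de GOOS e GOARCH a partir da string "SO/arquitetura",
# como em hack/lib/golang.sh do Kubernetes.
platform="linux/arm"
GOOS=${platform%/*}     # remove tudo a partir da última barra -> "linux"
GOARCH=${platform##*/}  # remove tudo até a última barra -> "arm"
echo "GOOS=${GOOS} GOARCH=${GOARCH}"
```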
O Cron é um daemon que executa scripts periodicamente em uma máquina Linux. Por padrão, o arquivo de Cron fica em /var/spool/cron/crontabs e são comandos do shell que são executados em um tempo definido pelo usuário.
O Gitlab-CI, por padrão, não faz a limpeza das imagens do Docker que são utilizadas durante os processos de build nos repositórios. Sendo assim, para evitar lotar o HD e a RAM da máquina em que as builds são executadas, um cronjob faz a limpeza de todos os containers e imagens que estão suspensos pelo Docker.
00 02,12 * * * /usr/bin/docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -e MINIMUM_IMAGES_TO_SAVE=1 -e REMOVE_VOLUMES=1 -v /etc:/etc:ro node:10
00 02,12 * * * /usr/bin/docker system prune -af
00 02,12 * * * /usr/bin/docker system prune --volumes -f
00 02,12 * * * find /gitlab-runner/builds -type f -ctime +2 -delete && find /gitlab-runner/builds -type d -empty -delete

Estes scripts são executados todos os dias às 02:00 e às 12:00.
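A semântica do último cronjob (remover apenas arquivos com mais de 2 dias e, depois, diretórios vazios) pode ser verificada em um diretório temporário; os nomes de arquivo abaixo são hipotéticos:

```shell
# Esboço: -ctime +2 casa apenas com arquivos alterados há mais de 2 dias;
# arquivos recém-criados, portanto, sobrevivem à limpeza.
tmp=$(mktemp -d)
mkdir -p "${tmp}/builds/job-recente"
touch "${tmp}/builds/job-recente/artefato.log"
find "${tmp}/builds" -type f -ctime +2 -delete
find "${tmp}/builds" -type d -empty -delete
ls "${tmp}/builds/job-recente"   # imprime: artefato.log
```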
Esta página apresenta os procedimentos de deploy das aplicações do meu mestrado. Esses procedimentos são fortemente baseados na utilização das tecnologias Gitlab-CI, Docker e Container Registry e, por este motivo, elas serão descritas aqui.
O Docker utiliza a virtualização em nível de sistema operacional, onde instâncias virtuais compartilham um único kernel e podem diferir apenas no espaço do usuário. Nesse tipo de virtualização, o sistema convidado deve compartilhar recursos e é vinculado ao kernel do sistema operacional hospedeiro; a única informação que precisa estar em um contêiner é o executável e suas dependências. Contêineres são leves e ganharam muito espaço no mercado com tecnologias como Docker e LXC. A imagem a seguir representa as principais diferenças na arquitetura entre máquinas virtuais e contêineres.
Docker é uma plataforma aberta para desenvolvimento, deploy e execução de aplicativos. O Docker Engine é um aplicativo cliente-servidor composto de três componentes: um servidor em execução como um daemon, uma API REST e uma CLI. O cliente Docker fala com o daemon, que constrói os aplicativos em imagens e os executa como contêineres. O cliente e o daemon podem ser executados no mesmo sistema ou se comunicar usando a API REST sobre soquetes UNIX ou uma interface de rede. O Docker também fornece um Registry, chamado Docker Hub, onde as imagens geradas pelos usuários podem ser armazenadas e baixadas para uso. A imagem a seguir exemplifica a arquitetura do Docker.
O Dockerfile é um arquivo, geralmente presente na pasta raiz do repositório, que indica ao Docker Engine como construir uma determinada Imagem. Ele possui uma linguagem simples e composta de algumas palavras reservadas. No geral, um Dockerfile possui instruções de como construir as dependências e executar o projeto. Abaixo é apresentado o Dockerfile do projeto tim-nakedsim.
FROM alpine:latest
ENV FLASK_APP=run.py FLASK_CONFIG=production FLASK_PORT=5000 NODE_ENV=production
RUN apk add --no-cache python3 && \
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
pip3 install --upgrade pip setuptools && \
if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \
if [ ! -e /usr/bin/python ]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \
rm -r /root/.cache
RUN apk add --update --no-cache python3-dev nodejs nodejs-npm py-mysqldb mysql-dev gcc g++ make git bash
RUN npm install --only=production -g bower
COPY . /parasite
WORKDIR /parasite
RUN pip3 install -r requirements.txt
RUN cd app/static && npm install --only=production && cd ../../
RUN cd app/static && bower --allow-root -P install && cd ../../
EXPOSE 5000
CMD ["python3", "run.py"]

As palavras reservadas neste Dockerfile são:
| Comando | Descrição |
|---|---|
| FROM | Deve ser sempre a primeira palavra no Dockerfile, especifica qual Imagem deve ser usada como base para a construção da nova Imagem. |
| COPY | Copia os arquivos de um diretório de origem para um destino; neste caso, copiando a raiz do repositório para a pasta /parasite. |
| WORKDIR | Define qual diretório deve ser utilizado como base para a Imagem e para os próximos comandos executados. |
| RUN | Executa um comando do shell; neste caso, instala as dependências do sistema e do projeto. |
| EXPOSE | Informa ao Docker que o contêiner escuta na porta especificada em tempo de execução. |
| CMD | Define um ponto de execução padrão para a Imagem; os scripts contidos neste comando serão executados assim que o contêiner for iniciado. |
Uma imagem do Docker é composta de um conjunto de camadas, que representam alterações no sistema de arquivos da imagem base, estas alterações estão descritas no arquivo de configuração chamado de Dockerfile. Ao ser executada, é adicionada a imagem uma camada chamada de Container Layer, todas as alterações no contêiner que está em execução são escritas nesta nova camada. A imagem abaixo ilustra a interação entre as camadas de uma imagem e a camada execução do contêiner.
Os contêineres se diferenciam de imagens do Docker apenas pela presença da nova camada de execução, onde as modificações recentes na imagem ficam salvas. Ao deletar um contêiner, sua camada de execução também é removida e as modificações feitas nela se perdem, porém a imagem que foi usada para a execução do contêiner é mantida inalterada. Desta forma, vários contêineres podem ser executados a partir de uma imagem e manterem seus dados independentemente. A imagem abaixo apresenta como um grupo de contêineres podem compartilhar a mesma imagem.
O Registry é uma aplicação server side de alta escalabilidade que armazena e permite ao usuário distribuir imagens do Docker. O Gitlab provê um Container Registry que divide espaço com o repositório, sendo o limite de espaço de 10 GB. Neste Registry, as imagens podem ser armazenadas automaticamente utilizando os pipelines do Gitlab-CI; isto pode ser feito configurando o .gitlab-ci.yml para gerar e registrar a imagem.
image_build:
  stage: docker_build
  before_script:
    - docker info
    - docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
  script:
    - docker pull ${RELEASE_IMAGE} || true
    - docker build --cache-from ${RELEASE_IMAGE} --tag ${CONTAINER_IMAGE}:${CI_COMMIT_SHA} .
    - docker push ${CONTAINER_IMAGE}:${CI_COMMIT_SHA}

Após o registro da imagem, para utilizá-la são necessários os seguintes comandos.
docker login registry.mrdevops-gitlab.com

Fazer o download da imagem mais atual:
docker pull registry.mrdevops-gitlab.com/ParasiteWeb/<repositório>

Executar o container:
docker run -tid -p <porta_host>:<porta_guest> --name <nome> registry.mrdevops-gitlab.com/ParasiteWeb/<repositório>

O Gitlab-CI é compatível com todos os passos necessários para build, upload e deploy de uma aplicação baseada em contêineres. Para que isso ocorra, é necessário que dentro da pasta do projeto exista um Dockerfile e que o Gitlab-CI esteja configurado para realizar os passos de construção e registro da imagem; isto pode ser feito seguindo o exemplo abaixo.
variables:
  CONTAINER_IMAGE: registry.mrdevops-gitlab.com/${CI_PROJECT_PATH}
  RELEASE_IMAGE: registry.mrdevops-gitlab.com/${CI_PROJECT_PATH}:latest

image_build:
  stage: docker_build
  before_script:
    - docker info
    - docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
  script:
    - docker pull ${RELEASE_IMAGE} || true
    - docker build --cache-from ${RELEASE_IMAGE} --tag ${CONTAINER_IMAGE}:${CI_COMMIT_SHA} .
    - docker push ${CONTAINER_IMAGE}:${CI_COMMIT_SHA}

image_release:
  stage: release
  before_script:
    - docker info
    - docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
  script:
    - docker pull ${CONTAINER_IMAGE}:${CI_COMMIT_SHA}
    - docker tag ${CONTAINER_IMAGE}:${CI_COMMIT_SHA} ${RELEASE_IMAGE}
    - docker push ${RELEASE_IMAGE}

deploy_prod:
  stage: deploy
  before_script:
    - apt update -y && apt install sshpass
  script:
    - sshpass -p ${DEPLOYMENT_SERVER_PASS} ssh -o StrictHostKeyChecking=no -o PreferredAuthentications=password -o PubkeyAuthentication=no ${DEPLOYMENT_SERVER_USER}@${DEPLOYMENT_SERVER_IP} "sudo docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY} && sudo docker stop parasite && sudo docker rm parasite && sudo docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest && sudo docker run -tid -p 8888:8000 --name parasite ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest"
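A convenção de nomes e tags usada no pipeline acima pode ser ilustrada expandindo as variáveis no shell (os valores de CI_PROJECT_PATH e CI_COMMIT_SHA abaixo são hipotéticos; no Gitlab-CI eles são preenchidos automaticamente pelo runner):

```shell
# Esboço: expansão das variáveis de imagem do .gitlab-ci.yml.
CI_PROJECT_PATH="ParasiteWeb/tim-nakedsim"   # valor hipotético
CI_COMMIT_SHA="a1b2c3d"                      # valor hipotético
CONTAINER_IMAGE="registry.mrdevops-gitlab.com/${CI_PROJECT_PATH}"
RELEASE_IMAGE="registry.mrdevops-gitlab.com/${CI_PROJECT_PATH}:latest"
# Cada build gera uma tag imutável por commit; o job de release
# reaponta a tag "latest" para a imagem daquele commit.
echo "${CONTAINER_IMAGE}:${CI_COMMIT_SHA}"
echo "${RELEASE_IMAGE}"
```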
O Gitlab-Runner é escrito em Go e é executado como um binário único, sem requisitos de execução além da sua instalação; sendo assim, ele pode ser executado no Linux, Windows e OSX. Para utilizar o runner com o Docker, é necessária a instalação do Docker em versão superior à v1.5.0.
O Gitlab-Runner implementa um conjunto de executores que podem ser utilizados para construir e executar builds em diferentes cenários de execução. Os executores disponíveis são:
Shell: o executor mais simples; permite a execução de builds no mesmo ambiente em que o runner está instalado e, por consequência, é compatível com qualquer ambiente em que um Gitlab-Runner possa ser instalado.
Docker: utiliza o Docker Engine para cada build, usando containers separados e isolados, com configurações predefinidas pelo .gitlab-ci.yml e de acordo com o config.toml.
VirtualBox: utiliza o VirtualBox para criar ambientes limpos a cada build. Nesta configuração, é necessário que a máquina virtual exponha um servidor SSH, para que seja possível conectar às máquinas virtuais, e que as máquinas possuam um shell compatível com o Bash.
SSH: permite que o Gitlab-CI faça SSH em máquinas remotas e execute as builds. Para isso, é necessário que as configurações de SSH sejam definidas no arquivo config.toml.
[[runners]]
executor = "ssh"
[runners.ssh]
host = "example.com"
port = "22"
user = "root"
password = "password"
identity_file = "/path/to/identity/file"
As configurações dos runners ficam no arquivo config/<ambiente>/runners.yaml, esse arquivo tem a seguinte configuração:
runners:
- description: test-runner-1
options:
url: 'https://gitlab.com'
registration_token: 'tZzvXHYxz9LqDRRN4nMv'
tag_list: [ 'test1', 'tag' ]
executor: 'docker'
docker_image: 'node:8'
retries: 1
- description: test-runner-2
options:
url: 'https://gitlab.com'
registration_token: 'tZzvXHYxz9LqDRRN4nMv'
tag_list: [ 'test2', 'tag' ]
executor: 'docker'
docker_image: 'node:8'
retries: 1
Este repositório possui arquivos que permitem a criação de runners localmente, usando máquinas virtuais, para testar a configuração das receitas. O arquivo Vagrantfile possui configurações para instanciar uma máquina virtual; para isto, é necessário apenas executar o comando vagrant up dentro da pasta do projeto. Caso queira convergir as ferramentas na sua máquina, sem fazer questão de isolamento da ferramenta, basta criar uma configuração como a seguinte no arquivo nodes.yaml:
local://desktop:
  run_list:
  - role[workstation]
local://laptop:
  run_list:
  - role[workstation]

Logo após, será necessário apenas executar o comando para convergir os ambientes.
Esta Wiki apresenta os conceitos e procedimentos de implantação das ferramentas de DevOps utilizadas no meu mestrado. Os scripts presentes neste repositório são fortemente baseados no Chef-Solo e escritos utilizando a linguagem Ruby; para maiores informações em relação à sintaxe utilizada neste repositório, recomendo que o leitor se familiarize com essas ferramentas através de suas respectivas documentações. Essa wiki possui os seguintes tópicos.
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: false
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
topologyManagerPolicy: none
volumeStatsAggPeriod: 1m0s

Currently, configuration and provisioning of IoT devices must largely be performed manually, making it difficult to quickly react to changes in application or infrastructure requirements. Moreover, we see the emergence of IoT devices (e.g., Intel IoT gateway, SmartThings Hub, and Raspberry Pi) that offer functionality beyond basic connected sensors and provide constrained execution environments with limited processing, storage, and memory resources to execute device firmware. These currently unused execution environments can be incorporated in IoT systems to offload parts of the business logic onto devices. In the context of our work, we refer to these devices as IoT gateways.
The scenarios we have decided to simulate belong to a typical application case of the Internet of Things in urban environments: a smart parking infrastructure where in each parking lot a sensor node is deployed to detect the presence or absence of a vehicle [13]. Moreover, there are gateways that take charge of collecting data from the sensor network, and vehicles that move around the city and contact the gateways, searching for an available parking lot. In particular, we have defined that vehicle requests to gateways are scheduled by a Homogeneous Poisson Process with interarrival time being an exponentially distributed random variable whose mean interarrival time is equal to 5 minutes. When a vehicle makes a request, it communicates with the geographically nearest gateway. The number of vehicles looking for a parking lot is chosen so that, at the end of the simulation, 10% of the traveling vehicles have issued a parking request to the gateways. Next, the gateway contacts the parking sensors under its own responsibility and waits for their response. Thereafter, it communicates a response to the vehicle. All this happens for a simulation lifetime equal to 4 hours.
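The exponentially distributed interarrival times above (mean of 5 minutes) can be sketched with inverse-transform sampling, e.g. in a short awk snippet (illustrative only; the variable name `mean` is ours, not from the paper):

```shell
# Sketch: draw 5 interarrival times (in minutes) from Exp(1/5) via the
# inverse transform t = -mean * ln(1 - U), with U ~ Uniform(0,1).
awk 'BEGIN {
  srand(7); mean = 5
  for (i = 1; i <= 5; i++)
    printf "%.2f\n", -mean * log(1 - rand())
}'
```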
In order to assess the scalability of our simulation platform, the number of nodes of the simulation has been varied, as reported in Table 1, and the required computation time has been measured. Simulations have been executed on a server with 2 GHz Xeon CPU, 16 GB RAM, Ubuntu GNU/Linux operating system.
Cellular communication technologies can play a crucial role in the development and expansion of IoT. Cellular networks can leverage their ubiquity, integrated security, network management and advanced backhaul connectivity capabilities into IoT networks. In this regard, capillary networks [5][7] aim to provide the capabilities of cellular networks to constrained networks while enabling the connectivity between wireless sensor networks and cellular networks. Hence, a capillary network provides local connectivity to devices using short-range radio access technologies while it connects to the backhaul cellular network through a node called Capillary Gateway (CGW).
In our system, the distributed cloud for IoT devices includes both data centers (DCs), where larger amounts of data from different sources can be processed and where also management functions typically reside, as well as local compute infrastructure, in particular within capillary networks. The latter makes it possible to process data locally, for example, to aggregate or filter sensor data, which can reduce the amount of data that is sent upstream towards data centers. Moreover, it enables also low-latency sensor-actuator control loops. In particular, we use containers, such as Docker [18], for packaging, deployment (including updates and new features), and execution of software in our cloud. In general, containers have a low overhead, which, in turn, allows a high density of instances in DCs. Moreover, containers can be executed in more constrained environments without hardware virtualization support, such as in local CGWs.
Research in micro operating systems or microkernels can provide inroads to tackling challenges related to deployment of applications on heterogeneous edge nodes. Given that these nodes do not have substantial resources like in a server, the general purpose computing environment that is facilitated on the edge will need to exhaust fewer resources. The benefits of quick deployment, reduced boot-up times and resource isolation are desirable [47]. There is preliminary research suggesting that mobile containers that multiplex device hardware across multiple virtual devices can provide similar performance to native hardware [48]. Container technologies, such as Docker, are maturing and enable quick deployment of applications on heterogeneous platforms. More research is required to adopt containers as a suitable mechanism for deploying applications on edge nodes.
Aggregators (i.e., IoT gateways) are located outside of the cloud and may be deployed on single board computers (e.g., Raspberry Pis), mini computers, or smart devices (e.g., smart phones, TVs or refrigerators) to provide management, connectivity, and data preprocessing for sensors' data. The main responsibility of IoT middleware is to facilitate such functionalities at aggregators. A worker node of a cluster-based data processing platform may be deployed on aggregators (i.e., Cluster worker). An example for this can be a worker node of a Kafka cluster that is responsible for in-place data cleaning and aggregation. The Aggregator Autonomic Manager component is a part of the autonomic management system which is responsible for monitoring performance metrics of aggregators and scaling related microservices if need be. IoT Control Services are the control services used by the autonomic manager to scale or reconfigure microservices. IoT Gateway Microservices are the part of the IoT application that implements application functionalities at this layer. For example, in our sample implementation of the IoT application, we deploy Kafka as microservices at Aggregators.
#### Container-based Pair-oriented IoT Service Provisioning (CPIS)
According to this approach, management operations are based on direct data exchanges between the involved devices. To enable secure transmissions, all control data traffic is exchanged over SSH communications. Indeed, in this paper, we do not focus on scheduling algorithms to choose the best candidate node for executing a specific task. Rather, we aim to investigate on the required procedures to activate a containerized service and to enable the interaction between the device requesting the task (i.e., the client) and the device actually executing the task (i.e., the server) in heterogeneous IoT environments. Fig. 1 shows the basic workflow between a client and a server to achieve container-based IoT service provisioning. First, the client issues the command for task activation on the server, which in turn executes the "containerized" service by leveraging the local Docker Engine. To guarantee the desired interoperability in compliance with current IoT standards, Constrained Application Protocol (CoAP) is recommended to define RESTful application interfaces [34], according to a traditional client-server model. Therefore, after the successful activation of the requested container, interactions between client and server follow the CoAP protocol rules. Once the desired task is completed, the client can also issue the command to stop and remove the container. In this approach, all the control burden is delegated to the client, which has to comprehensively manage the instantiation, monitoring, and removal of the instance.
Furthermore, the control requirements can be even increased when interactions with multiple nodes are required, such as in one of the following situations: (i) the IoT application is composed of multiple modules (for example, with different sensing requests); (ii) the IoT client may be in charge of keeping a backup service, so to guarantee service continuity and reliability even in case of node failure; (iii) the client can need to scale up or down the service instances by accounting for the resources of IoT nodes and the actual workload. To sum up, this approach can ensure that fast management procedures are achieved through the direct interaction between the cooperating nodes, while all the control features for service lifecycle are implemented in the client.
According to this second approach, a container-oriented Orchestration System provides multiple features to: (i) ease the deployment and monitoring of IoT services over multiple nodes; (ii) perform periodical service checking and resource monitoring; (iii) implement replication and auto-scaling policies. Similarly to the Docker Swarm framework, which we use as a reference platform, we consider two different logical nodes: Container-oriented Edge Manager (CEM) and Container-based IoT Worker (CIW), whose features are presented in the remainder of this subsection. An exemplary scenario is sketched in Fig. 2, where a CEM controls a cluster of nodes, operating as CIWs and leveraging the virtualization features provided by Docker Engine to host containerized IoT services. Container-oriented Edge Manager features. In our view, a predominant role has to be played by the network access point. This latter operates as a manager of the clustered devices, by both providing network connectivity and orchestrating integrated IoT applications. Indeed, we believe that network providers are in a predominant position to offer new management services to their IoT customers, by leveraging on their capillary infrastructure and on the emerging cloudification of the edge networks [35]. The CEM is responsible to handle several cluster management tasks: Maintaining the cluster state. The CEM maintains an up-to-date internal state of the entire swarm, accounting for the available resources offered by each associated device and for all the services running within the cluster. In a densely connected environment, multiple CEMs can be deployed over different access points to guarantee fault tolerance features. Indeed, Docker Swarm uses a Raft implementation to maintain a consistent distributed state of the cluster among multiple manager nodes. Service scheduling.
When a new service is requested, the CEM needs to select the most appropriate nodes to deploy the containerized applications, by matching service requirements and available workers' resources. Service monitoring. During the whole application lifecycle, the CEM must monitor the status of relevant containers and, if some failures are detected, new instances must be promptly activated, to guarantee the desired Quality of Experience. Distributing container-based application images. To store and distribute Docker images containing all the application code and dependencies, Docker has introduced public/private Docker registries. The desired flexibility is achieved by enabling the CEM to implement a private registry, to share trusted images among the nodes of the cluster. Container-based IoT Worker features. CIWs are devices running instances of Docker Engine whose sole purpose is to create, start, and stop containers. These devices require an Operating System whose kernel supports container-based virtualization. Some Bootstrap Code is necessary to activate Docker in Swarm mode, and to automatically join the desired cluster. Once a CIW has joined the cluster, the CEM can deploy containerized applications through the CIW's Docker Engine. If the relevant Docker image is not locally available, then the CIW's Docker Engine can retrieve it through either a public Docker registry, i.e., the Docker Hub, or a private Docker registry running on the CEM. CIWs can also operate as a proxy or gateway node for extremely resource-constrained nodes, which do not support container-based virtualization yet. In [36], gateway features can be deployed on-demand via Docker containers, to provide integration capabilities to sensor/actuation devices. In Fig. 3 an exemplary workflow of the CEMC approach is sketched. After the CEM receives a service request, it performs the relevant scheduling operation by identifying the device which can best host the requested applications in the cluster.
Then, the CEM issues a command to launch the containerized task in the selected CIW. When the container is running, the client can send a CoAP task request to the containerized CoAP server running on the CIW, which performs the desired logic operations, providing the output in a proper CoAP response packet.
In the above situations, it starts to be recognized that at least an additional layer, e.g., composed of gateway nodes relatively local to sensors/actuators, can significantly enrich the flexibility and suitability of the IoT cloud architecture. This intermediate layer may provide and support: scalability via more distributed and localized processing/state maintenance; data aggregation, with gateways working as data sinks for devices working with high sample rate; interoperability at network edges by overcoming the possible heterogeneity due to the large variety in integrated sensors and actuators; complementing resource-constrained IoT devices, usually with few storage/computing/communication resources and limited energy power; dynamic and efficient registration and discovery of IoT devices. It is manifest that the effective and efficient integration of IoT and the cloud is technically challenging and represents a very hot research topic nowadays. Starting from seminal approaches of proxy intermediation/optimization even before the cloud introduction [4, 5, 6], several more recent related papers propose, with slightly different flavors, three-layer IoT cloud architectures with the above intermediate layer of resources at network edges, by naming their approach as fog computing [7, 8], cloudlet [9], edge computing [10], or follow-me cloud [11]. Below we will use the term fog computing in a general way to indicate any three-layer IoT cloud architecture with an intermediate layer of geographically distributed gateway nodes, typically well positioned at the edges of network localities that are densely populated by IoT sensors and actuators.
We claim the suitability and effectiveness of introducing a highly manageable and interoperable way to create fog nodes on-the-fly via the adoption of containerization techniques, whose advantages are deemed prevalent to disadvantages also in the case of IoT gateways with limited resource availability. In particular, this section presents how we have enhanced our fog computing middleware via dynamic IoT gateway configuration through i) the creation of standard gateway base configuration; ii) the creation of container-based (typically small and atomic) applications/services, each with very specific functions; and iii) the dynamic orchestration of fog middleware services by the global cloud, with the possibility to install, replace, or extend the currently installed configurations and available middleware services.
SonarQube depends on a Java JDK installation; either OpenJDK or Oracle JDK can be used. The version installed by this repository is:
JDK >= 8.0
SonarQube has an optional dependency on a database server. This server is needed for data migration and backup, since H2, Sonar's default database, does not support those operations. The supported databases are:
PostgreSQL
MySQL (not recommended)
Oracle
Microsoft SQL Server
The recipe in this repository installs PostgreSQL Server, and also creates the system user sonar and maps it to the sonar_db database. SonarQube depends on the settings below to connect correctly to the database. These settings live in cookbooks/sonarqube/attributes/default.rb.
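The cookbook's JDBC attributes (listed just below) typically end up as `sonar.jdbc.*` entries in SonarQube's sonar.properties. The rendering shown here is a hypothetical sketch of that mapping, not the template actually used by the cookbook:

```ruby
# Hypothetical sketch: how the cookbook's JDBC attributes map to
# sonar.jdbc.* entries in sonar.properties.
jdbc = {
  'username' => 'sonar',
  'password' => 'sonar',
  'url'      => 'jdbc:postgresql://localhost:5432/sonar_db'
}

# One "sonar.jdbc.<key>=<value>" line per attribute.
properties = jdbc.map { |key, value| "sonar.jdbc.#{key}=#{value}" }
puts properties
```
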
default['sonarqube']['jdbc']['username'] = 'sonar'
default['sonarqube']['jdbc']['password'] = 'sonar'
default['sonarqube']['jdbc']['url'] = 'jdbc:postgresql://localhost:5432/sonar_db'
By default, SonarQube exposes port 9000 over TCP. To change this, edit the following files:
In cookbooks/sonarqube/attributes/default.rb:
default['sonarqube']['web']['port'] = 9000
In cookbooks/firewalld/recipes/default.rb:
firewalld_port '9000/tcp' do
action :add
zone 'public'
end
To converge SonarQube, just run rake converge:sonarqube-server. If you need to choose the environment in which Sonar will be installed, pass the variable CHAKE_ENV=<environment>.
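Conceptually, CHAKE_ENV selects which subdirectory of config/ provides the environment-specific files. The resolution logic below is an assumption for illustration only (it is not taken from chake's actual source); the `config/local` fallback mirrors the `config/local` directory present in this repository:

```ruby
# Hypothetical sketch: resolving the environment-specific config directory
# from CHAKE_ENV, falling back to the 'local' environment under config/.
def chake_config_dir(env = ENV['CHAKE_ENV'])
  File.join('config', env || 'local')
end

puts chake_config_dir('prod')  # => "config/prod"
puts chake_config_dir          # "config/local" unless CHAKE_ENV is set
```
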
Performance tests are executed using the AMQP protocol together with RabbitMQ in a Kubernetes cluster.
kind: Service
apiVersion: v1
metadata:
namespace: test-rabbitmq
name: rabbitmq
labels:
app: rabbitmq
type: LoadBalancer
spec:
type: NodePort
externalIPs:
- 192.168.25.2
ports:
- name: http
protocol: TCP
port: 15672
targetPort: 15672
- name: amqp
protocol: TCP
port: 5672
targetPort: 5672
- name: mqtt
protocol: TCP
port: 1883
targetPort: 1883
selector:
app: rabbitmq
---
apiVersion: v1
kind: ConfigMap
metadata:
name: rabbitmq-config
namespace: test-rabbitmq
data:
enabled_plugins: |
[rabbitmq_management,rabbitmq_peer_discovery_k8s, rabbitmq_mqtt].
rabbitmq.conf: |
## Cluster formation. See https://www.rabbitmq.com/cluster-formation.html to learn more.
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
## Should RabbitMQ node name be computed from the pod's hostname or IP address?
## IP addresses are not stable, so using [stable] hostnames is recommended when possible.
## Set to "hostname" to use pod hostnames.
## When this value is changed, so should the variable used to set the RABBITMQ_NODENAME
## environment variable.
cluster_formation.k8s.address_type = hostname
## How often should node cleanup checks run?
cluster_formation.node_cleanup.interval = 30
## Set to false if automatic removal of unknown/absent nodes
## is desired. This can be dangerous, see
## * https://www.rabbitmq.com/cluster-formation.html#node-health-checks-and-cleanup
## * https://groups.google.com/forum/#!msg/rabbitmq-users/wuOfzEywHXo/k8z_HWIkBgAJ
cluster_formation.node_cleanup.only_log_warning = true
cluster_partition_handling = autoheal
## See https://www.rabbitmq.com/ha.html#master-migration-data-locality
queue_master_locator=min-masters
## This is just an example.
## This enables remote access for the default user with well known credentials.
## Consider deleting the default user and creating a separate user with a set of generated
## credentials instead.
## Learn more at https://www.rabbitmq.com/access-control.html#loopback-users
loopback_users.guest = false
mqtt.listeners.tcp.default = 1883
mqtt.default_user = rabbit
mqtt.default_pass = s3kRe7
mqtt.vhost = /
mqtt.exchange = amq.topic
# 24 hours by default
mqtt.subscription_ttl = 86400000
mqtt.prefetch = 10
---
apiVersion: apps/v1
# See the Prerequisites section of https://www.rabbitmq.com/cluster-formation.html#peer-discovery-k8s.
kind: StatefulSet
metadata:
name: rabbitmq
namespace: test-rabbitmq
spec:
selector:
matchLabels:
app: rabbitmq
serviceName: rabbitmq
# Three nodes is the recommended minimum. Some features may require a majority of nodes
# to be available.
replicas: 1
template:
metadata:
labels:
app: rabbitmq
spec:
serviceAccountName: rabbitmq
terminationGracePeriodSeconds: 10
nodeSelector:
# Use Linux nodes in a mixed OS kubernetes cluster.
# Learn more at https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os
kubernetes.io/hostname: bahamut
containers:
- name: rabbitmq-k8s
image: rabbitmq:3.8
volumeMounts:
- name: config-volume
mountPath: /etc/rabbitmq
# Learn more about what ports various protocols use
# at https://www.rabbitmq.com/networking.html#ports
ports:
- name: http
protocol: TCP
containerPort: 15672
- name: amqp
protocol: TCP
containerPort: 5672
livenessProbe:
exec:
# This is just an example. There is no "one true health check" but rather
# several rabbitmq-diagnostics commands that can be combined to form increasingly comprehensive
# and intrusive health checks.
# Learn more at https://www.rabbitmq.com/monitoring.html#health-checks.
#
# Stage 2 check:
command: ["rabbitmq-diagnostics", "status"]
initialDelaySeconds: 60
# See https://www.rabbitmq.com/monitoring.html for monitoring frequency recommendations.
periodSeconds: 60
timeoutSeconds: 15
readinessProbe:
exec:
# This is just an example. There is no "one true health check" but rather
# several rabbitmq-diagnostics commands that can be combined to form increasingly comprehensive
# and intrusive health checks.
# Learn more at https://www.rabbitmq.com/monitoring.html#health-checks.
#
# Stage 2 check:
command: ["rabbitmq-diagnostics", "status"]
# To use a stage 4 check:
# command: ["rabbitmq-diagnostics", "check_port_connectivity"]
initialDelaySeconds: 20
periodSeconds: 60
timeoutSeconds: 10
imagePullPolicy: Always
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: RABBITMQ_USE_LONGNAME
value: "true"
# See a note on cluster_formation.k8s.address_type in the config file section
- name: K8S_SERVICE_NAME
value: rabbitmq
- name: RABBITMQ_NODENAME
value: rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
- name: K8S_HOSTNAME_SUFFIX
value: .$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
- name: RABBITMQ_ERLANG_COOKIE
value: "mycookie"
volumes:
- name: config-volume
configMap:
name: rabbitmq-config
items:
- key: rabbitmq.conf
path: rabbitmq.conf
- key: enabled_plugins
path: enabled_plugins
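The env entries in the StatefulSet above compose the fully qualified RabbitMQ node name. As a quick sketch, here is how the RABBITMQ_NODENAME template expands for the first pod (StatefulSet pods are named `<statefulset-name>-<ordinal>`, hence `rabbitmq-0`):

```ruby
# Sketch: expanding RABBITMQ_NODENAME for pod rabbitmq-0 using the
# values set in the StatefulSet's env section.
pod_name  = 'rabbitmq-0'     # MY_POD_NAME (first pod of the StatefulSet)
service   = 'rabbitmq'       # K8S_SERVICE_NAME
namespace = 'test-rabbitmq'  # MY_POD_NAMESPACE

nodename = "rabbit@#{pod_name}.#{service}.#{namespace}.svc.cluster.local"
puts nodename
# => rabbit@rabbitmq-0.rabbitmq.test-rabbitmq.svc.cluster.local
```

Every replica computes the same stable DNS-based name, which is what the `cluster_formation.k8s.address_type = hostname` setting in the ConfigMap relies on.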
[
{
'name': 'publish_consume_low',
'type': 'simple',
'uri': 'amqp://guest:guest@10.105.244.74:5672',
'params': [{
'time-limit': 180,
'producer-count': 10,
'consumer-count': 10
}]
}
]

[
{
'name': 'publish_consume_medium',
'type': 'simple',
'uri': 'amqp://guest:guest@10.105.244.74:5672',
'params': [{
'time-limit': 180,
'producer-count': 100,
'consumer-count': 100
}]
}
]

[
{
'name': 'message-sizes-large',
'type': 'varying',
'uri': 'amqp://guest:guest@10.105.244.74:5672',
'params': [{
'time-limit': 30
}],
'variables': [{
'name': 'min-msg-size',
'values': [
5000,
10000,
50000,
100000,
500000,
1000000
]
}]
},
{
'name': 'rate-vs-latency',
'type': 'rate-vs-latency',
'uri': 'amqp://guest:guest@10.105.244.74:5672',
'params': [{
'time-limit': 30
}]
}
]
JMeter can be used either as a CLI or as a GUI to run the performance tests, and the resulting data can then be exported in several formats (XML, CSV).
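The 'varying' scenario above sweeps min-msg-size over a list of values. The expansion logic below is a hypothetical sketch of how that variable list turns into individual runs (it is not the test harness's actual code):

```ruby
# Hypothetical sketch: expanding the 'variables' entry of the
# message-sizes-large scenario into one run per min-msg-size value.
scenario = {
  'name'      => 'message-sizes-large',
  'params'    => [{ 'time-limit' => 30 }],
  'variables' => [{ 'name'   => 'min-msg-size',
                    'values' => [5000, 10_000, 50_000, 100_000, 500_000, 1_000_000] }]
}

variable = scenario['variables'].first
runs = variable['values'].map do |value|
  # Each run keeps the base params and pins one swept value.
  scenario['params'].first.merge(variable['name'] => value)
end

puts runs.length  # 6 runs, one per message size
```
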
The Kubernauts group has openly published an operator for running load tests on Kubernetes with JMeter. That setup uses InfluxDB to build a time series of the data generated by JMeter Pods on the Kubernetes nodes; the time series is then read by a Grafana Pod, which presents the data in a friendlier interface.